Translation and Dictionary
Words near each other
・ Winnipesaukee Playhouse
・ Winnipesaukee River
・ Winnisimmet Street Railway
・ Winnisook Club
・ Winnisook Lake
・ Winnisquam
・ Winnisquam Lake
・ Winnisquam Regional High School
・ Winnisquam, New Hampshire
・ Winnit Club
・ Winnitoba railway station
・ Winnoc
・ Winnogóra
・ Winnold House
・ Winnona Park Historic District
Winnow (algorithm)
・ Winnowie
・ Winnowing
・ Winnowing barn
・ Winnowing Basket (Chinese constellation)
・ Winnowing Oar
・ WinnowTag
・ Winnport
・ Winnsboro
・ Winnsboro High School
・ Winnsboro Historic District
・ Winnsboro Independent School District
・ Winnsboro Mills, South Carolina
・ Winnsboro, Louisiana
・ Winnsboro, South Carolina



Winnow (algorithm) : English Wikipedia
Winnow (algorithm)
The winnow algorithm〔Nick Littlestone (1988). "Learning Quickly When Irrelevant Attributes Abound: A New Linear-threshold Algorithm", ''Machine Learning'' 2: 285–318.〕 is a technique from machine learning for learning a linear classifier from labeled examples. It is very similar to the perceptron algorithm; however, where the perceptron uses an additive weight-update scheme, Winnow uses a multiplicative scheme that allows it to perform much better when many dimensions are irrelevant (hence its name). It is a simple algorithm that scales well to high-dimensional data. During training, Winnow is shown a sequence of positive and negative examples. From these it learns a decision hyperplane that can then be used to label novel examples as positive or negative. The algorithm can also be used in the online learning setting, where the learning and classification phases are not clearly separated.
== Algorithm ==

The basic algorithm, Winnow1, is as follows. The instance space is X=\{0,1\}^n, that is, each instance is described as a set of Boolean-valued features. The algorithm maintains non-negative weights w_i for i\in\{1,\ldots,n\}, which are initially set to 1, one weight for each feature. When the learner is given an example (x_1,\ldots,x_n), it applies the typical prediction rule for linear classifiers:
* If \sum_{i=1}^n w_i x_i > \Theta, then predict 1
* Otherwise predict 0
Here \Theta is a real number called the ''threshold''. Together with the weights, the threshold defines a dividing hyperplane in the instance space. Good mistake bounds are obtained if \Theta=n/2.
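To make the prediction rule concrete, here is a minimal Python sketch. The function and variable names (predict, weights, theta) are illustrative assumptions, not notation from Littlestone's paper.

<syntaxhighlight lang="python">
def predict(weights, x, theta):
    """Winnow1 prediction rule: output 1 if the weighted sum of the
    Boolean features exceeds the threshold theta, otherwise 0."""
    total = sum(w * xi for w, xi in zip(weights, x))
    return 1 if total > theta else 0
</syntaxhighlight>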
For each example with which it is presented, the learner applies the following update rule:
* If an example is correctly classified, do nothing.
* If an example is predicted to be 1 but the correct result was 0, all of the weights implicated in the mistake are set to 0 (demotion step).
* If an example is predicted to be 0 but the correct result was 1, all of the weights implicated in the mistake are multiplied by \alpha (promotion step).
Here, "implicated" means weights on features of the instance that have value 1. A typical value for \alpha is 2.
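Continuing the sketch above, the Winnow1 update rule and a toy training pass might look as follows; the helper name update and the example data are assumptions chosen for illustration.

<syntaxhighlight lang="python">
def update(weights, x, predicted, label, alpha=2.0):
    """Winnow1 update: only weights of features with value 1
    (those "implicated" in the mistake) are changed."""
    if predicted == label:
        return                       # correctly classified: do nothing
    for i, xi in enumerate(x):
        if xi == 1:
            if predicted == 1 and label == 0:
                weights[i] = 0.0     # demotion step: set to 0
            else:                    # predicted 0, correct result was 1
                weights[i] *= alpha  # promotion step: multiply by alpha

# Toy run with n = 4 Boolean features, threshold n/2, weights starting at 1.
n = 4
weights = [1.0] * n
theta = n / 2
for x, label in [([1, 0, 1, 0], 1), ([0, 1, 1, 0], 0), ([1, 0, 0, 1], 1)]:
    y_hat = predict(weights, x, theta)
    update(weights, x, y_hat, label)
</syntaxhighlight>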
There are many variations to this basic approach. ''Winnow2''〔Littlestone (1988)〕 is similar, except that in the demotion step the weights are divided by \alpha instead of being set to 0. ''Balanced Winnow'' maintains two sets of weights, and thus two hyperplanes; this can then be generalized for multi-label classification.
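Under the same assumed setup as the sketches above, Winnow2 changes only the demotion line: implicated weights are divided by \alpha rather than zeroed, so a demoted feature can later recover.

<syntaxhighlight lang="python">
def update_winnow2(weights, x, predicted, label, alpha=2.0):
    """Winnow2 update: promotion as in Winnow1, but demotion divides
    by alpha instead of setting the weight to 0."""
    if predicted == label:
        return
    for i, xi in enumerate(x):
        if xi == 1:
            if predicted == 1 and label == 0:
                weights[i] /= alpha  # demotion step: divide by alpha
            else:
                weights[i] *= alpha  # promotion step
</syntaxhighlight>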

Source of excerpt: the free encyclopedia Wikipedia.
Read the full text of "Winnow (algorithm)" on Wikipedia.


